Testing a New AI-Integrated Dev Workflow at Work
At work, I recently proposed a new workflow to better integrate AI tools into our development process. Here’s the outline of what we came up with:
- The PM writes a PRD (Product Requirements Document) in Markdown on GitHub. 
- Developers and stakeholders review the PRD using GitHub’s review features. 
- Once the PRD is finalized, developers write detailed project specs, including design docs for each feature. 
- Test cases are written with the PRD’s acceptance criteria baked directly into the tests (see the sketch after this list). 
- During development, engineers feed these documents into their AI coding assistants (I use Cursor). 
- If a task depends on information outside the codebase, we may need to build MCP (Model Context Protocol) servers to expose it. Building and maintaining MCPs could eventually become part of our official responsibilities (a rough sketch also follows the list). 
- Developers audit the AI’s output. 
- QA is conducted. 
- Release. 
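
To make the "acceptance criteria baked into the tests" idea concrete, here's a minimal sketch of what one of those test files could look like, assuming a TypeScript codebase with Vitest. The PRD section number, the `archiveProject` function, and the `./projects` module are hypothetical placeholders for whatever the feature actually touches:

```ts
// spec: PRD §3.2 — "A user can archive a project they own." (hypothetical section)
// The acceptance criteria are restated verbatim in the test names so the AI
// assistant and reviewers can trace each test back to the PRD.
import { describe, it, expect } from "vitest";
import { archiveProject } from "./projects"; // hypothetical module under test

describe("Archive project (PRD §3.2)", () => {
  it("marks the project as archived when the owner archives it", async () => {
    const project = { id: "p1", ownerId: "u1", archived: false };
    const result = await archiveProject(project, { userId: "u1" });
    expect(result.archived).toBe(true);
  });

  it("rejects the request when a non-owner tries to archive", async () => {
    const project = { id: "p1", ownerId: "u1", archived: false };
    await expect(archiveProject(project, { userId: "u2" })).rejects.toThrow();
  });
});
```

The point isn't the specific assertions; it's that the tests double as a machine-readable copy of the PRD, so the AI gets the requirements in the same context window as the code.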
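
And for the MCP idea, here's a rough sketch of a tiny read-only server, assuming the official `@modelcontextprotocol/sdk` TypeScript package and its documented stdio server pattern. The server name, the `get_ticket` tool, and the stubbed ticket lookup are hypothetical stand-ins for whatever external source (issue tracker, analytics, internal wiki) the AI would need to read:

```ts
// Hypothetical MCP server exposing ticket data the AI can't see in the repo.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Stub in place of a real issue-tracker API call.
async function fetchTicket(id: string) {
  return { title: `Ticket ${id}`, body: "Description would come from the tracker API." };
}

const server = new McpServer({ name: "ticket-lookup", version: "0.1.0" });

// One read-only tool: given a ticket ID, return its description as text.
server.tool(
  "get_ticket",
  { id: z.string().describe("Ticket ID, e.g. PROJ-123") },
  async ({ id }) => {
    const ticket = await fetchTicket(id);
    return { content: [{ type: "text", text: `${ticket.title}\n\n${ticket.body}` }] };
  }
);

// Cursor (and other MCP clients) launch this process and talk to it over stdio.
await server.connect(new StdioServerTransport());
```

Whether we own servers like this per team or centrally is one of the things the pilot should tell us.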
We’re currently piloting this workflow with a small new feature. The goal is to evaluate how well it works, especially in terms of AI effectiveness and development efficiency.
Key questions we’re trying to answer during this trial:
- What’s the optimal task size for an AI assistant to handle without generating nonsense?
- What additional context can we feed the AI to reduce hallucinations?
- How much time can this workflow actually save?